16 research outputs found

    Nonlinear Conjugate Gradients Algorithm For 2-D Magnetotelluric Inversion

    We investigate a new algorithm for computing regularized solutions of the two-dimensional magnetotelluric inverse problem. The algorithm employs a nonlinear conjugate gradients (NLCG) scheme to minimize an objective function that penalizes data residuals and second spatial derivatives of resistivity. We compare this algorithm theoretically and numerically to two previous algorithms for constructing such 'minimum-structure' models: the Gauss-Newton method, which solves a sequence of linearized inverse problems and has been the standard approach to nonlinear inversion in geophysics, and an algorithm due to Mackie and Madden, which solves a sequence of linearized inverse problems incompletely using a (linear) conjugate gradients technique. Numerical experiments involving synthetic and field data indicate that the two algorithms based on conjugate gradients (NLCG and Mackie-Madden) are more efficient than the Gauss-Newton algorithm in terms of both computer memory requirements and CPU time needed to find accurate solutions to problems of realistic size. This owes largely to the fact that the conjugate gradients-based algorithms avoid two computationally intensive tasks that are performed at each step of a Gauss-Newton iteration: calculation of the full Jacobian matrix of the forward modeling operator, and complete solution of a linear system on the model space. The numerical tests also show that the Mackie-Madden algorithm reduces the objective function more quickly than our new NLCG algorithm in the early stages of minimization, but NLCG is more effective in the later computations. To help understand these results, we describe the Mackie-Madden and new NLCG algorithms in detail and couch each as a special case of a more general conjugate gradients scheme for nonlinear inversion.
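
    As a concrete reference point, the following is a minimal sketch of the generic nonlinear conjugate gradients scheme the abstract builds on, with a Polak-Ribiere direction update and a crude backtracking line search. The objective and gradient here are placeholders (a toy quadratic), not the paper's regularized magnetotelluric objective or its forward modeling operator.

        import numpy as np

        def nlcg(objective, gradient, m0, n_iter=50, tol=1e-8):
            """Minimal nonlinear conjugate gradients (Polak-Ribiere) sketch.
            objective(m) -> float, gradient(m) -> array; m0 = starting model."""
            m = m0.copy()
            g = gradient(m)
            d = -g                      # first search direction: steepest descent
            for _ in range(n_iter):
                # crude backtracking line search along d
                alpha, f0 = 1.0, objective(m)
                while objective(m + alpha * d) > f0 and alpha > 1e-12:
                    alpha *= 0.5
                m = m + alpha * d
                g_new = gradient(m)
                if np.linalg.norm(g_new) < tol:
                    break
                # Polak-Ribiere update; clamping beta at 0 restarts with
                # steepest descent and preserves the descent property
                beta = max(0.0, g_new @ (g_new - g) / (g @ g))
                d = -g_new + beta * d
                g = g_new
            return m

        # toy usage: minimize phi(m) = 0.5 m^T A m - b^T m
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([1.0, 1.0])
        m_hat = nlcg(lambda m: 0.5 * m @ A @ m - b @ m,
                     lambda m: A @ m - b, np.zeros(2))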

    Sensitivity Analysis Of Amplitude Variation With Offset (Avo) In Fractured Media

    The variation in seismic P-to-P reflection amplitude with offset (AVO) caused by a system of fractures embedded in an isotropic background is investigated. Additionally, a sensitivity analysis of AVO parameters with respect to the fracture system parameters is made. The fracture system is assumed to be aligned vertically or horizontally and can be gas filled or fluid filled. Elastic constants are calculated using the formulations of Schoenberg (1988). From the elastic constants, the reflection amplitude as a function of angle is calculated using equations from Ruger (1997). Theoretical results for a single interface between fractured and unfractured media, both with and without lithology change, show opportunities for extraction of crack density information from seismic P-wave data collected in fractured geothermal or hydrocarbon reservoirs. For vertically oriented fractures, wide-angle data (> 30°) are crucial for the estimation of fracture parameters. (Massachusetts Institute of Technology, Borehole Acoustics and Logging Consortium; Massachusetts Institute of Technology, Earth Resources Laboratory, Reservoir Delineation Consortium)
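
    For readers who want to experiment with the azimuthal AVO behavior described here, the sketch below implements a Ruger (1997)-style P-P reflection approximation for an interface between an isotropic layer and an HTI (vertically fractured) medium. The coefficient structure follows the commonly quoted form of Ruger's approximation and should be checked against the original paper; the velocities, densities, and anisotropy contrasts in the usage lines are made-up illustrative values, not the paper's models.

        import numpy as np

        def ruger_hti_pp(theta, phi, a1, b1, rho1, a2, b2, rho2,
                         d_deltaV, d_epsV, d_gamma):
            """Ruger (1997)-style P-P reflection coefficient sketch for an
            isotropic-over-HTI interface. theta = incidence angle, phi =
            azimuth from the fracture symmetry axis (both in radians).
            Verify coefficients against the original paper before use."""
            a, b, rho = 0.5*(a1+a2), 0.5*(b1+b2), 0.5*(rho1+rho2)
            dZ_Z = (a2*rho2 - a1*rho1) / (a*rho)     # P-impedance contrast
            da_a = (a2 - a1) / a                      # Vp contrast
            G1, G2 = rho1*b1**2, rho2*b2**2           # shear moduli
            dG_G = (G2 - G1) / (0.5*(G1 + G2))
            k = (2.0*b/a)**2
            s2, t2 = np.sin(theta)**2, np.tan(theta)**2
            c2, sp2 = np.cos(phi)**2, np.sin(phi)**2
            grad = da_a - k*dG_G + (d_deltaV + 2.0*k*d_gamma)*c2
            curv = da_a + d_epsV*c2**2 + d_deltaV*sp2*c2
            return 0.5*dZ_Z + 0.5*grad*s2 + 0.5*curv*s2*t2

        # azimuthal AVO for an illustrative (hypothetical) fracture set:
        # response along (phi = 0) vs. across (phi = 90 deg) the fractures
        theta = np.radians(np.linspace(0.0, 40.0, 9))
        r_along = ruger_hti_pp(theta, 0.0, 3000, 1500, 2.3,
                               3200, 1700, 2.4, -0.10, -0.05, 0.05)
        r_across = ruger_hti_pp(theta, np.pi/2, 3000, 1500, 2.3,
                                3200, 1700, 2.4, -0.10, -0.05, 0.05)

    The divergence of the two curves beyond roughly 30° of incidence is the numerical counterpart of the abstract's point that wide-angle data are crucial for estimating fracture parameters.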

    Simultaneous Estimation of Reflectivity and Geologic Texture: Least-Squares Migration with a Hierarchical Bayesian Model

    In many geophysical inverse problems, smoothness assumptions on the underlying geology are utilized to mitigate the effects of poor resolution and noise in the data and to improve the quality of the inferred model parameters. Within a Bayesian inference framework, a priori assumptions about the probabilistic structure of the model parameters impose such a smoothness constraint or regularization. We consider the particular problem of inverting seismic data for the subsurface reflectivity of a 2-D medium, where we assume a known velocity field. In particular, we consider a hierarchical Bayesian generalization of the Kirchhoff-based least-squares migration (LSM) problem. We present here a novel methodology for estimation of both the optimal image and regularization parameters in a least-squares migration setting. To do so we utilize a Bayesian statistical framework that treats both the regularization parameters and image parameters as random variables to be inferred from the data. Hence rather than fixing the regularization parameters prior to inverting for the image, we allow the data to dictate where to regularize. In order to construct our prior model of the subsurface and regularization parameters, we define an undirected graphical model (or Markov random field) where vertices represent reflectivity values, and edges between vertices model the degree of correlation (or lack thereof) between the vertices. Estimating optimal values for the vertex parameters gives us an image of the subsurface reflectivity, while estimating optimal edge strengths gives us information about the local “texture” of the image, which, in turn, may tell us something about the underlying geology. Subsequently incorporating this information in the final model produces more clearly visible discontinuities in the final image. The inference framework is verified on a 2-D synthetic dataset, where the hierarchical Bayesian imaging results significantly outperform standard LSM images. (Shell International Exploration and Production B.V.; Massachusetts Institute of Technology, Earth Resources Laboratory, Founding Members Consortium)
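
    The alternation the abstract describes, estimating the image given edge strengths and then re-estimating edge strengths from the image, can be illustrated with a small iteratively reweighted least-squares stand-in. This is a 1-D toy with a first-difference edge operator, not the paper's Kirchhoff LSM operator or its full Bayesian inference; the function name and all parameter values are hypothetical choices for the example.

        import numpy as np

        def hierarchical_lsm_sketch(G, d, n, lam=1.0, eps=1e-3, n_outer=10):
            """Toy 1-D stand-in for the hierarchical scheme: alternate a MAP
            image solve (fixed edge weights) with an edge-weight update from
            the current image. G: forward operator, d: data, n: model cells."""
            D = np.diff(np.eye(n), axis=0)     # first-difference edge operator
            w = np.ones(n - 1)                 # one weight per edge; start smooth
            m = np.zeros(n)
            for _ in range(n_outer):
                # (1) MAP image given weights: (G^T G + lam D^T W D) m = G^T d
                A = G.T @ G + lam * D.T @ (w[:, None] * D)
                m = np.linalg.solve(A, G.T @ d)
                # (2) re-estimate edge weights: a small weight across a large
                # jump penalizes that discontinuity less, so the "texture"
                # adapts to what the data support
                w = 1.0 / (np.abs(D @ m) + eps)
                w /= w.mean()
            return m, w

        # toy usage: recover a blocky model from noisy direct observations
        n = 50
        G = np.eye(n)
        true = np.where(np.arange(n) < 25, 0.0, 1.0)
        d = true + 0.05 * np.random.default_rng(0).normal(size=n)
        m, w = hierarchical_lsm_sketch(G, d, n)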

    Iterative estimation of reflectivity and image texture: Least-squares migration with an empirical Bayes approach

    In many geophysical inverse problems, smoothness assumptions on the underlying geology are used to mitigate the effects of nonuniqueness, poor data coverage, and noise in the data and to improve the quality of the inferred model parameters. Within a Bayesian inference framework, a priori assumptions about the probabilistic structure of the model parameters can impose such a smoothness constraint, analogous to regularization in a deterministic inverse problem. We have considered an empirical Bayes generalization of the Kirchhoff-based least-squares migration (LSM) problem. We have developed a novel methodology for estimation of the reflectivity model and regularization parameters, using a Bayesian statistical framework that treats both of these as random variables to be inferred from the data. Hence, rather than fixing the regularization parameters prior to inverting for the image, we allow the data to dictate where to regularize. Estimating these regularization parameters gives us information about the degree of conditional correlation (or lack thereof) between neighboring image parameters, and, subsequently, incorporating this information in the final model produces more clearly visible discontinuities in the estimated image. The inference framework is verified on 2D synthetic data sets, in which the empirical Bayes imaging results significantly outperform standard LSM images. We note that although we evaluated this method within the context of seismic imaging, it is in fact a general methodology that can be applied to any linear inverse problem in which there are spatially varying correlations in the model parameter space. (MIT Energy Initiative, Shell International Exploration and Production B.V.; ERL Founding Member Consortium)
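
    The core empirical Bayes idea, letting the data select the regularization strength by maximizing the marginal likelihood (evidence), can be shown in its simplest linear-Gaussian form. The sketch below uses an iid Gaussian prior with a single precision lambda, whereas the paper infers spatially varying MRF parameters; the formula is the standard evidence expression for Bayesian linear regression, not the paper's hierarchy.

        import numpy as np

        def log_evidence(G, d, lam, sigma2):
            """Log marginal likelihood of d = G m + e with e ~ N(0, sigma2 I)
            and prior m ~ N(0, lam^-1 I). Standard Bayesian linear-model
            evidence; the paper's prior is a spatially varying MRF, not iid."""
            n_d, n_m = G.shape
            S_inv = G.T @ G / sigma2 + lam * np.eye(n_m)   # posterior precision
            mu = np.linalg.solve(S_inv, G.T @ d / sigma2)  # posterior mean
            _, logdet_Sinv = np.linalg.slogdet(S_inv)
            return (0.5 * n_m * np.log(lam)
                    - 0.5 * n_d * np.log(2 * np.pi * sigma2)
                    - 0.5 * logdet_Sinv
                    - 0.5 * np.sum((d - G @ mu) ** 2) / sigma2
                    - 0.5 * lam * np.sum(mu ** 2))

        # empirical Bayes: pick lambda maximizing the evidence on a grid,
        # rather than fixing it by hand before the inversion
        rng = np.random.default_rng(1)
        G = rng.normal(size=(80, 40)); m_true = rng.normal(size=40)
        d = G @ m_true + 0.1 * rng.normal(size=80)
        lams = np.logspace(-4, 2, 25)
        lam_hat = lams[np.argmax([log_evidence(G, d, l, 0.01) for l in lams])]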

    Bayesian Neural Networks for Geothermal Resource Assessment: Prediction with Uncertainty

    We consider the application of machine learning to the evaluation of geothermal resource potential. A supervised learning problem is defined where maps of 10 geological and geophysical features within the state of Nevada, USA, are used to define geothermal potential across a broad region. We have available a relatively small set of positive training sites (known resources or active power plants) and negative training sites (known drill sites with unsuitable geothermal conditions) and use these to constrain and optimize artificial neural networks for this classification task. The main objective is to predict the geothermal resource potential at unknown sites within a large geographic area where the defining features are known. These predictions could be used to target promising areas for further detailed investigations. We describe the evolution of our work from defining a specific neural network architecture to training and optimization trials. Our analysis exposes the inevitable problems of model variability and the resulting prediction uncertainty. Finally, to address these problems we apply the concept of Bayesian neural networks, a heuristic approach to regularization in network training, and make use of the practical interpretation of the formal uncertainty measures they provide. (Comment: 27 pages, 12 figures)
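
    The abstract does not spell out its Bayesian-network recipe, so the sketch below uses a deep ensemble, one common inexpensive proxy for Bayesian predictive uncertainty: several small networks are trained from different random initializations, and their disagreement is read as uncertainty. The tiny numpy classifier and the synthetic 10-feature data are assumptions for illustration, not the authors' architecture or the Nevada feature maps.

        import numpy as np

        def train_mlp(X, y, h=16, epochs=500, lr=0.1, seed=0):
            """Tiny one-hidden-layer classifier, full-batch gradient descent
            on the logistic loss. A stand-in for the paper's networks."""
            rng = np.random.default_rng(seed)
            W1 = rng.normal(0, 1 / np.sqrt(X.shape[1]), (X.shape[1], h))
            b1 = np.zeros(h)
            W2 = rng.normal(0, 1 / np.sqrt(h), (h, 1)); b2 = 0.0
            for _ in range(epochs):
                H = np.tanh(X @ W1 + b1)
                p = 1 / (1 + np.exp(-(H @ W2 + b2))).ravel()
                g = (p - y)[:, None] / len(y)     # d(loss)/d(logits)
                W2 -= lr * H.T @ g; b2 -= lr * g.sum()
                gH = (g @ W2.T) * (1 - H**2)      # backprop through tanh
                W1 -= lr * X.T @ gH; b1 -= lr * gH.sum(axis=0)
            return W1, b1, W2, b2

        def predict(params, X):
            W1, b1, W2, b2 = params
            return 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2))).ravel()

        # ensemble over random inits as a cheap Bayesian proxy: member
        # disagreement at a site is its prediction uncertainty
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10)); y = (X[:, 0] + X[:, 1] > 0).astype(float)
        members = [train_mlp(X, y, seed=s) for s in range(10)]
        probs = np.stack([predict(p, X) for p in members])
        mean, std = probs.mean(axis=0), probs.std(axis=0)  # potential +/- uncertainty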

    Three years of Fermi GBM Earth Occultation Monitoring: Observations of Hard X-ray/Soft Gamma-Ray Sources

    The Gamma-ray Burst Monitor (GBM) on board Fermi has been providing continuous data to the astronomical community since 2008 August 12. In this paper we present the results of the analysis of the first three years of these continuous data using the Earth occultation technique to monitor a catalog of 209 sources. From this catalog, we detect 99 sources, including 40 low-mass X-ray binary/neutron star systems, 31 high-mass X-ray binary/neutron star systems, 12 black hole binaries, 12 active galaxies, 2 other sources, plus the Crab Nebula and the Sun. Nine of these sources are detected in the 100-300 keV band, including seven black-hole binaries, the active galaxy Cen A, and the Crab. The Crab and Cyg X-1 are also detected in the 300-500 keV band. GBM provides complementary data to other sky monitors below 100 keV and is the only all-sky monitor above 100 keV. Up-to-date light curves for all of the catalog sources can be found at http://heastro.phys.lsu.edu/gbm/. (Comment: 24 pages, 12 figures, accepted for publication in ApJ)
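
    The geometric essence of the Earth occultation technique is a step in the detector count rate at the known time a source sets behind (or rises from behind) the Earth's limb. The toy fit below estimates a source flux from such a step by linear least squares; the real GBM analysis fits atmospheric-transmission models per energy channel and detector, which this sketch deliberately omits.

        import numpy as np

        def occultation_step_fit(t, rate, t_occ):
            """Fit rate(t) = background + flux * step(t) by linear least
            squares, where the step turns off at the (known) occultation
            time t_occ. Only the geometric essence of the technique."""
            step = (t < t_occ).astype(float)   # source visible before t_occ
            A = np.column_stack([np.ones_like(t), step])
            (bkg, flux), *_ = np.linalg.lstsq(A, rate, rcond=None)
            return bkg, flux

        # synthetic check: 100 counts/s background, 5 counts/s from the source
        rng = np.random.default_rng(2)
        t = np.linspace(-60, 60, 240)
        rate = 100 + 5 * (t < 0) + rng.normal(0, 1, t.size)
        bkg, flux = occultation_step_fit(t, rate, 0.0)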

    When a Standard Candle Flickers

    The Crab is the only bright, steady source in the X-ray sky. The Crab consists of a pulsar wind nebula, a synchrotron nebula, and a cloud of expanding ejecta. On small scales, the Crab is extremely complex and turbulent. X-ray astronomers have often used the Crab as a standard candle to calibrate instruments, assuming its spectrum and overall flux remain constant over time. Four instruments (Fermi/GBM, RXTE/PCA, Swift/BAT, INTEGRAL/ISGRI) show an approximately 5% (50 mCrab) decline in the Crab from 2008 to 2010. This decline appears to be larger with increasing energy and is not present in the pulsed flux, implying changes in the shock acceleration, electron population, or magnetic field in the nebula. The Crab is known to be dynamic on small scales, so it is not too surprising that its total flux varies as well. Caution should be taken when using the Crab for in-orbit calibrations.

    All-Sky Monitoring of Variable Sources with Fermi GBM

    This slide presentation reviews the monitoring of variable sources with the Fermi Gamma-ray Burst Monitor (GBM). It covers the use of the Earth occultation technique, observations of the Crab Nebula with GBM, and comparisons with other satellites' observations. The instruments on board the four satellites indicate a decline in the Crab from 2008 to 2010.

    Grid-search event location with non-Gaussian error models

    This study employs an event location algorithm based on grid search to investigate the possibility of improving seismic event location accuracy by using non-Gaussian error models. The primary departure from the Gaussian error model that is considered is the explicit use of non-Gaussian probability distributions in defining optimal estimates of location parameters. Specifically, the class of generalized Gaussian distributions is considered, which leads to the minimization of L_p norms of arrival time residuals for arbitrary p ≥ 1. The generalized Gaussian error models are implemented both with fixed standard errors assigned to the data and with an iterative reweighting of the data on a station/phase basis. An implicit departure from a Gaussian model is also considered, namely, the use of a simple outlier rejection criterion for disassociating arrivals during the grid-search process. These various mechanisms were applied to the ISC phase picks for the IWREF reference events, and the resulting grid-search solutions were compared to the GT locations of the events as well as the ISC solutions. The results indicate that event mislocations resulting from the minimization of L_p residual norms, with p near 1, are generally better than those resulting from the conventional L_2 norm minimization (Gaussian error assumption). However, this result did not always hold for mislocations in event depth. Further, outlier rejection and iterative reweighting, applied with L_2 minimization, performed nearly as well as L_1 minimization in some cases. The results of this study suggest that ISC can potentially improve its location capability with the use of global search methods and non-Gaussian error models. However, given the limitations of this study, further research, including the investigation of other statistical and optimization techniques not addressed here, is needed to assess this potential more completely.
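
    A minimal version of the grid-search locator with an L_p misfit is sketched below: candidate epicenters on a 2-D grid, constant-velocity travel times standing in for a real Earth model, and origin time removed via the median of the residuals (optimal for L_1) or their mean (optimal for L_2). The synthetic test with one gross outlier pick illustrates why p near 1 is more robust than the Gaussian p = 2; nothing here reproduces the study's ISC data or its reweighting and outlier-rejection schemes.

        import numpy as np

        def grid_search_locate(stations, t_obs, grid_x, grid_y, v=6.0, p=1.0):
            """Toy 2-D grid-search locator minimizing the L_p norm of
            arrival-time residuals. Constant velocity v (km/s) replaces a
            real travel-time model; origin time is removed by centering the
            residuals (median for p <= 1, mean otherwise, as approximation)."""
            best, best_cost = None, np.inf
            for x in grid_x:
                for y in grid_y:
                    t_pred = np.hypot(stations[:, 0] - x,
                                      stations[:, 1] - y) / v
                    r = t_obs - t_pred
                    r = r - np.median(r) if p <= 1 else r - r.mean()
                    cost = np.sum(np.abs(r) ** p)
                    if cost < best_cost:
                        best, best_cost = (x, y), cost
            return best

        # synthetic test with one gross outlier pick: the L_1 solution
        # shrugs it off, while the L_2 solution gets dragged toward it
        rng = np.random.default_rng(3)
        stations = rng.uniform(-100, 100, size=(8, 2))
        t_true = np.hypot(stations[:, 0] - 10, stations[:, 1] - 20) / 6.0
        t_obs = t_true + rng.normal(0, 0.1, 8); t_obs[0] += 5.0  # bad pick
        g = np.linspace(-50, 50, 101)
        loc_L1 = grid_search_locate(stations, t_obs, g, g, p=1.0)
        loc_L2 = grid_search_locate(stations, t_obs, g, g, p=2.0)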

    Reviews
